GITA

Extreme Computer Science

Extreme Programming Explained describes extreme programming as a software engineering discipline that organizes people to produce higher-quality software more productively. In this article, we analyze eXtreme Programming (XP) from the perspective of security engineering, in order to evaluate to what extent this approach could be used to develop critical software securely. Rather than testing only larger functions, XP takes the concept to an extreme: developers write automated tests (sometimes within the software modules themselves) that verify the workings of even tiny sections of code.
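As a sketch of what such fine-grained automated tests look like, consider the following example using Python's built-in unittest module. The function and test names here are illustrative, not taken from the article:

```python
import unittest

def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    # XP-style unit tests verify even tiny pieces of code,
    # not just larger functions.
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_single_word(self):
        self.assertEqual(word_count("extreme"), 1)

    def test_multiple_words(self):
        self.assertEqual(word_count("extreme programming explained"), 3)

# Run the suite without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWordCount)
unittest.TextTestRunner().run(suite)
```

Because such tests live alongside the code and run automatically, any change that breaks even a tiny piece of behavior is caught immediately.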
Functional testing means producing test data covering the program's inputs, outputs, and functions. Module tests are run when a module is completed, using correct, extreme, and incorrect data. Boundary data is data that lies at the border between one of a program's answers and the next.
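A minimal sketch of these data categories, using a hypothetical grading function (the function and its thresholds are assumptions for illustration, not from the article):

```python
def letter_grade(score):
    """Map a 0-100 score to a letter grade (illustrative example)."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"

# Correct (normal) data: well inside one band.
assert letter_grade(95) == "A"

# Boundary data: at the border between one answer and the next.
assert letter_grade(90) == "A"
assert letter_grade(89) == "B"

# Extreme data: the edges of the valid input range.
assert letter_grade(100) == "A"
assert letter_grade(0) == "C"

# Incorrect data: should be rejected, not silently computed.
try:
    letter_grade(101)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

The boundary cases (90 vs. 89) are exactly where off-by-one mistakes in the conditionals would surface, which is why boundary data is singled out.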
Normal test data should generate no errors, and any calculations a program makes from those values should be checked for accuracy. Test data should exercise every path through a program, forcing each instruction to execute. In an XP project, tests are created for every requirement; once this is done, the only thing left is developing code that passes those tests. When developers have good unit tests for a piece of software, they can refactor the code with confidence, rerunning the tests whenever needed. Developers ship the software frequently, which allows them to get feedback from production and to refine the product according to the new requirements and feedback they receive. Pair programming means two members of a development team working together at a single workstation whenever they develop production code. The method, developed by Kent Beck, calls for pairs of developers working together, automated unit tests, and frequent code changes that keep the design simple. The difference between extreme programming and more conventional systems-development techniques is its focus on designing and writing code for the needs of today, rather than those of tomorrow, the following week, or next month.
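The test-first workflow described above can be sketched as follows. The requirement, function name, and values are hypothetical, chosen only to show the ordering of steps:

```python
# Step 1 (test-first): the test for a requirement is written
# before any production code exists.
def test_discount():
    # Hypothetical requirement: orders over 100 get a 10% discount.
    assert apply_discount(150.0) == 135.0
    # Boundary case: no discount at exactly 100.
    assert apply_discount(100.0) == 100.0

# Step 2: code is then written with the sole goal of passing the test.
def apply_discount(total):
    return total * 0.9 if total > 100 else total

# Step 3: with the test in place, the code can be refactored freely;
# rerunning test_discount() after each change catches regressions.
test_discount()
```

Once every requirement has such a test, "the only thing left" really is writing code until the whole suite passes.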
The most advanced software-engineering techniques will be applied to ensure that the developed and deployed software meets the highest standards necessary to guarantee the correctness and productivity of the DoE's scientific code. Development of systems operating in clinical, research, HPC, and Big Data/AI environments will require new, transparent techniques, while delivering the state of the art in healthcare data science and in-silico technologies will require exascale computing resources.
After all, in the commercial world, ideas can be developed and tested at scales that are not feasible in academic settings, using truly big data and unique, massively scaled hardware and software infrastructures. To achieve these goals, I worked closely with a number of large-scale applications and researchers to jointly design the key software infrastructures for those communities. Today, Apple and the other cloud-services companies dominate both the computing hardware and software ecosystems, in terms of scale and technological approach. Extremely large-scale computation and data have become indispensable for data-driven, computation-intensive science and engineering, promising dramatically new insights into natural and engineered systems. Two years ago, a $16-billion insurance company in London launched a four-year bespoke development project to integrate data from its legacy systems into a call centre using traditional development techniques. Advanced computer science requires a non-recurring engineering (NRE) investment in developing new technologies and systems. The project will tackle disruptive technological shifts stemming from emerging trends in high-end computing, large data sets, AI, and increasingly heterogeneous architectures such as neuromorphic and quantum computing systems. This long-term program is designed to catalyze breakthroughs on this scientific frontier by convening leading innovators and pioneers in applied mathematics (scientific computation, optimization, data analysis, statistics, and more).
To accomplish these things, we will need to invest in hiring and supporting teams of researchers (chip designers, systems software developers, and packaging engineers) in a holistic, integrated fashion, and to create opportunities that make working on science and engineering problems more intellectually attractive. Fourth, we will need to build actual hardware and software prototypes at scale: not mere increments, but prototypes that truly validate new ideas using specialized silicon and associated software.